Intelligent emotion recognition from facial and whole-body expressions using adaptive ensemble models
Author
Abstract
Automatic emotion recognition has been widely studied and applied to various computer vision tasks (e.g. health monitoring, driver state surveillance, personalized learning, and security monitoring). With the great potential provided by current advanced 3D scanner technology (e.g. the Kinect), we shed light on robust emotion recognition based on users’ facial and whole-body expressions. As revealed by recent psychological and behavioural research, facial expressions are effective at communicating categorical emotions (e.g. happy, sad, surprise), while bodily expressions contribute more to the perception of dimensional emotional states (e.g. the arousal and valence dimensions). Thus, we propose two novel emotion recognition systems, applying adaptive ensemble classification and regression models to the facial and bodily modalities respectively. The proposed real-time 3D facial Action Unit (AU) intensity estimation and emotion recognition system automatically selects 16 motion-based facial feature sets to estimate the intensities of 16 diagnostic AUs. A set of six novel adaptive ensemble classifiers is then proposed for robust classification of the six basic emotions and for the detection of newly arrived unseen emotion classes (emotions that are not included in the training set). In both offline and online real-time evaluation, the system shows the highest recognition accuracy in comparison with related work, as well as flexibility and good adaptation in detecting newly arrived novel emotions (e.g. ‘contempt’, which is not among the six basic emotions). The second system focuses on continuous and dimensional affect prediction from users’ bodily expressions using adaptive regression. Both static posture and dynamic motion bodily features are extracted and subsequently selected by a Genetic Algorithm to identify their most discriminative combinations for both the valence and arousal dimensions.
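The novel-emotion detection idea behind the adaptive ensemble classifiers can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual model: the nearest-centroid base learner, the distance threshold, and the class names are all hypothetical stand-ins. The essential mechanism is that an ensemble votes among the known basic emotions, and an input lying far from every learned class is flagged as a potential unseen ("novel") emotion:

```python
import math
import random


class CentroidClassifier:
    """Toy base learner: nearest-centroid over feature vectors
    (e.g. AU-intensity vectors)."""

    def fit(self, X, y):
        sums, counts = {}, {}
        for x, label in zip(X, y):
            s = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                s[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {lbl: [v / counts[lbl] for v in s]
                          for lbl, s in sums.items()}

    def predict_with_distance(self, x):
        # Return the closest class and the distance to its centroid.
        best, best_d = None, float("inf")
        for lbl, c in self.centroids.items():
            d = math.dist(x, c)
            if d < best_d:
                best, best_d = lbl, d
        return best, best_d


class NoveltyAwareEnsemble:
    """Majority-vote ensemble of bootstrap-trained base learners that
    flags inputs far from all known classes as a potential novel emotion."""

    def __init__(self, n_members=5, threshold=2.0, seed=0):
        self.n_members = n_members
        self.threshold = threshold  # hypothetical distance cut-off
        self.rng = random.Random(seed)

    def fit(self, X, y):
        n = len(X)
        self.members = []
        for _ in range(self.n_members):
            idx = [self.rng.randrange(n) for _ in range(n)]  # bootstrap
            m = CentroidClassifier()
            m.fit([X[i] for i in idx], [y[i] for i in idx])
            self.members.append(m)

    def predict(self, x):
        votes, dists = {}, []
        for m in self.members:
            lbl, d = m.predict_with_distance(x)
            votes[lbl] = votes.get(lbl, 0) + 1
            dists.append(d)
        # Far from every known class centroid -> unseen emotion class.
        if sum(dists) / len(dists) > self.threshold:
            return "novel"
        return max(votes, key=votes.get)


# Demo: two synthetic emotion clusters plus one far-away probe.
rng = random.Random(1)
X = [[rng.gauss(0, 0.2), rng.gauss(0, 0.2)] for _ in range(10)] \
  + [[rng.gauss(5, 0.2), rng.gauss(0, 0.2)] for _ in range(10)]
y = ["happy"] * 10 + ["sad"] * 10
ensemble = NoveltyAwareEnsemble(n_members=5, threshold=2.0, seed=0)
ensemble.fit(X, y)
known = ensemble.predict([0.0, 0.0])   # near the "happy" cluster
probe = ensemble.predict([20.0, 20.0])  # far from both clusters
```

A point close to a learned cluster is classified normally, while the distant probe is rejected as "novel", mirroring how the system could flag an emotion such as 'contempt' that was absent from training.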
An adaptive ensemble regression model is then proposed to robustly map subjects’ emotional states onto a continuous arousal-valence affective space using the identified feature subsets. Experimental results show that the proposed system outperforms other benchmark models and achieves promising performance compared with state-of-the-art research reported in the literature. Furthermore, we propose a novel semi-feature-level bimodal fusion framework that integrates facial and bodily information to draw a more comprehensive and robust dimensional interpretation of subjects’ emotional states. By combining the optimal discriminative bodily features and the derived AU intensities as inputs, the proposed adaptive ensemble regression model achieves remarkable improvements compared with applying the bodily features alone.
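The Genetic Algorithm feature-selection step can be sketched as follows. This is a hedged illustration, not the paper's exact setup: the bitmask encoding, the correlation-based fitness, the size penalty, and all parameters below are assumptions. The idea is to evolve binary masks over the bodily features, scoring each mask by how strongly its selected features correlate with one affect dimension (e.g. arousal):

```python
import random


def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / ((vx * vy) ** 0.5 + 1e-12)


def fitness(mask, X, y, penalty=0.05):
    """Mean |correlation| of selected features with the target,
    minus a hypothetical penalty per selected feature."""
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return -1.0
    score = sum(abs(pearson([row[i] for row in X], y)) for i in cols)
    return score / len(cols) - penalty * len(cols)


def ga_select(X, y, pop=20, gens=30, seed=0):
    """Evolve a binary feature mask: elitism, one-point crossover,
    bit-flip mutation."""
    rng = random.Random(seed)
    n = len(X[0])
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda m: fitness(m, X, y),
                        reverse=True)
        parents = scored[:pop // 2]          # keep the fittest half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:           # occasional bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, X, y))


# Demo: feature 0 tracks a synthetic "arousal" signal; features 1-2 are noise.
rng = random.Random(3)
arousal = [rng.uniform(-1, 1) for _ in range(40)]
X = [[a + rng.gauss(0, 0.05), rng.random(), rng.random()] for a in arousal]
best = ga_select(X, arousal, pop=20, gens=30, seed=0)
```

The evolved mask retains the informative feature while the size penalty discourages carrying the noise features along, which is the same trade-off the GA makes when isolating discriminative posture and motion features for each affect dimension.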
Related resources
Seeing Fearful Body Expressions Activates the Fusiform Cortex and Amygdala
Darwin's evolutionary approach to organisms' emotional states attributes a prominent role to expressions of emotion in whole-body actions. Researchers in social psychology [1,2] and human development [3] have long emphasized the fact that emotional states are expressed through body movement, but cognitive neuroscientists have almost exclusively considered isolated facial expressions (for review...
Adaptive Rule-Based Facial Expression Recognition
This paper addresses the problem of emotion recognition in faces through an intelligent neuro-fuzzy system, which is capable of analysing facial features extracted following the MPEG-4 standard and classifying facial images according to the underlying emotional states, following rules derived from expression profiles. Results are presented which illustrate the capability of the developed system...
Emotion Recognition from Facial Expressions using Multilevel HMM
Human-computer intelligent interaction (HCII) is an emerging field of science aimed at providing natural ways for humans to use computers as aids. It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. The most expressive way humans display emo...
Identification of Facial Gestures and Audio Visual Interactions Using Ensemble Matrix of Multi Classifiers
Human emotions are expressed using facial expressions, the tone of voice, and hand and body gestures. An approach to human-computer interaction must be accurate and robust. We use the concept of multi-classifier systems to study human emotions in an audio-visual detection system. To automate recognition of emotional state, machines must be taught expressions to understand facial gestur...
Analysis and Synthesis of Facial Expressions by Feature-Points Tracking and Deformable Model
Facial expression recognition is useful for designing new interactive devices, offering the possibility of new ways for humans to interact with computer systems. In this paper we develop a facial expression analysis and synthesis system. The analysis part of the system is based on the facial features extracted from facial feature points (FFP) in frontal image sequences. Selected facial feature poi...
Publication date: 2015